
    One-Shot Federated Learning For LEO Constellations That Reduces Convergence Time From Days To 90 Minutes

    A Low Earth Orbit (LEO) satellite constellation consists of a large number of small satellites traveling in space with high mobility and collecting vast amounts of mobility data, such as cloud movement for weather forecasting, herds of animals migrating across geo-regions, the spread of forest fires, and aircraft tracking. Machine learning can be utilized to analyze these mobility data to address global challenges, and Federated Learning (FL) is a promising approach because it eliminates the need to transmit raw data and hence is both bandwidth- and privacy-friendly. However, FL requires many communication rounds between clients (satellites) and the parameter server (PS), leading to substantial delays of up to several days in LEO constellations. In this paper, we propose a novel one-shot FL approach for LEO satellites, called LEOShot, that needs only a single communication round to complete the entire learning process. LEOShot comprises three processes: (i) synthetic data generation, (ii) knowledge distillation, and (iii) virtual model retraining. We evaluate and benchmark LEOShot against the state of the art, and the results show that it drastically expedites FL convergence by more than an order of magnitude. Surprisingly, despite its one-shot nature, its model accuracy is on par with, and in some cases exceeds by a large margin, that of regular iterative FL schemes.
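    The three LEOShot stages can be illustrated at toy scale. Everything below is an illustrative stand-in, not the paper's actual implementation: the "client models" are random linear classifiers, random probe inputs stand in for a learned generator, and a least-squares fit stands in for gradient-based distillation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: each "client model" is a linear classifier W (d x k).
d, k, n_clients = 8, 3, 5
client_models = [rng.normal(size=(d, k)) for _ in range(n_clients)]

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# (i) Synthetic data generation: random probe inputs stand in for a
#     generator trained on the parameter server.
x_syn = rng.normal(size=(256, d))

# (ii) Knowledge distillation: ensemble-averaged soft labels of the
#      received client models act as the teacher signal.
teacher = np.mean([softmax(x_syn @ W) for W in client_models], axis=0)

# (iii) Virtual model retraining: fit a fresh server model to the teacher's
#       soft labels (least-squares on log-probabilities as a toy surrogate
#       for gradient-based distillation).
W_server, *_ = np.linalg.lstsq(x_syn, np.log(teacher + 1e-9), rcond=None)

student = softmax(x_syn @ W_server)
agreement = np.mean(student.argmax(1) == teacher.argmax(1))
print(f"student/teacher label agreement: {agreement:.2f}")
```

    Note that all three stages run on the server after a single upload of client models, which is what removes the iterative communication rounds.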

    LightESD: Fully-Automated and Lightweight Anomaly Detection Framework for Edge Computing

    Anomaly detection is widely used in a broad range of domains, from cybersecurity to manufacturing, finance, and beyond. Deep learning based anomaly detection has recently drawn much attention because of its superior capability of recognizing complex data patterns and identifying outliers accurately. However, deep learning models are typically optimized iteratively in a central server with input data gathered from edge devices, and such data transfer between edge devices and the central server imposes substantial overhead on the network and incurs additional latency and energy consumption. To overcome this problem, we propose a fully automated, lightweight, statistical learning based anomaly detection framework called LightESD. It is an on-device learning method that requires no data transfer between edge and server, and it is so lightweight that most low-end edge devices can easily afford it with negligible delay, CPU/memory utilization, and power consumption. Yet, it achieves highly competitive detection accuracy. Another salient feature is that it can auto-adapt to virtually any dataset without manually setting or configuring model parameters or hyperparameters, which is a drawback of most existing methods. We focus on time series data due to its pervasiveness in edge applications such as IoT. Our evaluation demonstrates that LightESD outperforms other SOTA methods on detection accuracy, efficiency, and resource consumption. Additionally, its fully automated operation gives it another competitive advantage in terms of practical usability and generalizability.
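    The name suggests an ESD-style (extreme studentized deviate) statistical detector for seasonal time series. As a rough, hypothetical illustration of that general idea — not the LightESD algorithm itself — the sketch below removes a per-phase seasonal median and flags residuals by robust (MAD-based) z-score:

```python
import numpy as np

def detect_anomalies(x, period, z_thresh=3.5):
    """Seasonal-median detrending + robust z-score flagging.

    A lightweight, parameter-free sketch in the spirit of ESD-style
    detectors; it is not the paper's algorithm.
    """
    x = np.asarray(x, dtype=float)
    phase = np.arange(len(x)) % period
    # Seasonal component: median of each phase position across periods.
    seasonal = np.array([np.median(x[phase == p]) for p in range(period)])[phase]
    resid = x - seasonal
    med = np.median(resid)
    mad = np.median(np.abs(resid - med)) or 1e-9   # guard against zero MAD
    z = 0.6745 * (resid - med) / mad               # ~N(0,1) for Gaussian data
    return np.flatnonzero(np.abs(z) > z_thresh)

# Toy series: period-24 seasonality, small noise, two injected spikes.
rng = np.random.default_rng(0)
t = np.arange(240)
x = np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.05, size=t.size)
x[50] += 6.0
x[180] -= 6.0
print(detect_anomalies(x, period=24))
```

    Medians and MADs make the detector robust to the anomalies it is trying to find, and nothing here needs per-dataset tuning, which is the property the abstract emphasizes.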

    Coordinated Container Migration and Base Station Handover in Mobile Edge Computing

    Offloading computationally intensive tasks from mobile users (MUs) to a virtualized environment, such as containers on a nearby edge server, can significantly reduce processing time and hence end-to-end (E2E) delay. However, when users are mobile, such containers need to be migrated to other edge servers located closer to the MUs to keep the E2E delay low. Meanwhile, the mobility of MUs necessitates handover among base stations in order to keep the wireless connections between MUs and base stations uninterrupted. In this paper, we address the joint problem of container migration and base-station handover by proposing a coordinated migration-handover mechanism, with the objective of achieving low E2E delay and minimizing service interruption. The mechanism determines the optimal destinations and times for migration and handover in a coordinated manner, along with a delta checkpoint technique that we propose. We implement a testbed edge computing system with our proposed coordinated migration-handover mechanism, and evaluate its performance using real-world applications implemented with Docker containers (an industry standard). The results demonstrate that our mechanism achieves 30%-40% lower service downtime and 13%-22% lower E2E delay as compared to other mechanisms. Our work is instrumental in offering a smooth user experience in mobile edge computing.
    Comment: 6 pages. Accepted for presentation at the IEEE Global Communications Conference (Globecom), Taipei, Taiwan, Dec. 202
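    The flavor of the coordination can be shown with a toy joint choice: instead of picking the handover target and the container's destination server separately, evaluate (base station, server) pairs together and amortize the one-off migration downtime over expected future requests. All names and numbers below are made-up illustrations, not the paper's model.

```python
from itertools import product

# Hypothetical cost model (all values illustrative, in ms):
# wireless_delay[bs]: MU <-> base-station latency; backhaul: BS <-> edge server;
# proc[srv]: processing delay; migrate_cost[srv]: one-off downtime to move there.
wireless_delay = {"bs1": 4.0, "bs2": 2.0}
backhaul = {("bs1", "s1"): 1.0, ("bs1", "s2"): 3.0,
            ("bs2", "s1"): 5.0, ("bs2", "s2"): 1.5}
proc = {"s1": 6.0, "s2": 5.0}
migrate_cost = {"s1": 0.0, "s2": 2.5}   # container currently runs on s1

def coordinated_choice(horizon_requests=10):
    """Pick handover target and migration destination together, amortizing
    the one-off migration downtime over future requests."""
    best = None
    for bs, srv in product(wireless_delay, proc):
        e2e = wireless_delay[bs] + backhaul[(bs, srv)] + proc[srv]
        total = e2e + migrate_cost[srv] / horizon_requests
        if best is None or total < best[0]:
            best = (total, bs, srv)
    return best

print(coordinated_choice())
```

    A separate choice would first hand over to the lowest-delay base station and only then pick a server; the joint search can instead accept a migration whose downtime pays for itself through a shorter backhaul path.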

    Cloud-Enhanced Robotic System for Smart City Crowd Control

    Cloud robotics in smart cities is an emerging paradigm that enables autonomous robotic agents to communicate and collaborate with a cloud computing infrastructure. It complements the Internet of Things (IoT) by creating an expanded network where robots offload data-intensive computation to the ubiquitous cloud to ensure quality of service (QoS). However, offloading for robots is significantly complex due to their unique characteristics of mobility, skill learning, data collection, and decision-making. In this paper, a generic cloud robotics framework is proposed to realize the smart city vision while taking into consideration its various complexities. Specifically, we present an integrated framework for a crowd control system where cloud-enhanced robots are deployed to perform the necessary tasks. The task offloading is formulated as a constrained optimization problem capable of handling any task flow that can be characterized by a Directed Acyclic Graph (DAG). We consider two scenarios of minimizing energy and time, respectively, and develop a genetic algorithm (GA)-based approach to identify the optimal task offloading decisions. The performance comparison with two benchmarks shows that our GA scheme achieves the desired energy and time performance. We also show the adaptability of our algorithm by varying the bandwidth and movement parameters; the results illustrate their impact on offloading decisions. Finally, we present a multi-task-flow optimal path sequence problem that highlights how a robot can plan its task completion via movements that expend the minimum energy, integrating path planning with offloading for robotics. To the best of our knowledge, this is the first attempt to evaluate cloud-based task offloading for a smart city crowd control system.
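    A minimal GA over binary offload decisions gives the flavor of the approach. The task graph, cost numbers, and GA parameters below are illustrative assumptions, not the paper's model (which also encodes DAG precedence in the schedule rather than only penalizing crossing edges):

```python
import random

random.seed(1)

# Hypothetical 6-task DAG and costs; all numbers are illustrative.
tasks = range(6)
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4), (4, 5)]
local_cost = [9, 7, 8, 10, 6, 9]   # e.g. robot-side energy per task
cloud_cost = [2, 2, 2, 3, 2, 2]    # robot's energy share if offloaded
tx_cost = 3                         # per DAG edge crossing robot <-> cloud

def fitness(chrom):  # lower is better; chrom[t] = 1 means "offload task t"
    run = sum(cloud_cost[t] if chrom[t] else local_cost[t] for t in tasks)
    comm = sum(tx_cost for u, v in edges if chrom[u] != chrom[v])
    return run + comm

def ga(pop_size=30, gens=60, mut=0.1):
    pop = [[random.randint(0, 1) for _ in tasks] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]           # keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, len(tasks))              # one-point crossover
            child = [g ^ (random.random() < mut)               # bit-flip mutation
                     for g in a[:cut] + b[cut:]]
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

best = ga()
print(best, fitness(best))
```

    The same chromosome encoding extends naturally to the time-minimization scenario by swapping the fitness function, which is presumably why a GA suits both objectives in the paper.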

    Sensor OpenFlow: Enabling Software-Defined Wireless Sensor Networks


    Crowdsourcing with tullock contests: A new perspective

    Incentive mechanisms for crowdsourcing have been extensively studied under the framework of all-pay auctions. Along a distinct line, this paper proposes to use Tullock contests as an alternative tool to design incentive mechanisms for crowdsourcing. We are inspired by the conduciveness of Tullock contests to attracting user entry (yet not necessarily a higher revenue) in other domains. In this paper, we explore a new dimension in optimal Tullock contest design by superseding the contest prize - which is fixed in conventional Tullock contests - with a prize function that depends on the (unknown) winner's contribution, in order to maximize the crowdsourcer's utility. We show that this approach leads to attractive practical advantages: (a) it is well-suited for rapid prototyping in fully distributed web agents and smartphone apps; (b) it overcomes the disincentive to participate caused by players' antagonism to an increasing number of rivals. Furthermore, we optimize conventional, fixed-prize Tullock contests to construct the strongest benchmark to compare against our mechanism. Through extensive evaluations, we show that our mechanism significantly outperforms the optimal benchmark, by over threefold on the crowdsourcer's utility (profit) and by up to ninefold on the players' social welfare.
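    The contest success function behind both designs is the standard Tullock form p_i = x_i^r / (x_1^r + ... + x_n^r), where x_i is player i's contribution and r is the discriminatory power. The sketch below contrasts a fixed prize with a contribution-dependent prize function; the particular prize functions and effort values are illustrative assumptions, not the paper's optimal design:

```python
import numpy as np

def tullock_win_probs(efforts, r=1.0):
    """Tullock contest success function: player i wins with
    probability x_i^r / sum_j x_j^r."""
    e = np.asarray(efforts, dtype=float) ** r
    return e / e.sum()

def expected_payoffs(efforts, prize):
    """Expected payoff of each player: win probability times the prize
    that player would receive, minus the (sunk) effort cost.
    `prize` maps the winner's own contribution to the prize value."""
    p = tullock_win_probs(efforts)
    return [pi * prize(xi) - xi for pi, xi in zip(p, efforts)]

efforts = [2.0, 3.0, 5.0]
print(expected_payoffs(efforts, prize=lambda x: 20.0))      # conventional fixed prize
print(expected_payoffs(efforts, prize=lambda x: 4.0 * x))   # prize grows with contribution
```

    Under the contribution-dependent prize, low contributors see their expected payoff shrink while high contributors keep theirs, which illustrates how such a prize function can tilt incentives toward larger contributions.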

    Two Novel 2x2 Models for MEMS-Based Optical Switches

    The next-generation all-optical IP network calls for optical switching, and among its candidate implementation technologies, optical MEMS appears to be the most promising. However, the road to a purely optical network is not smooth: complexity, reliability, and scalability are among the most pressing challenges. This paper proposes two novel models for designing 2x2 MEMS-based optical switches, each of which surmounts these obstacles through its own distinguishing features. By optimizing the overall performance of optical switches, they are anticipated to play critical roles in future all-optical networks and to make cost-effective optical switches prominent in the communications arena.